
Autonomous Transformation: The Strategy Shift for AI
AI is forcing a leadership choice. You can treat it like a stack of use cases and end up with a lot of motion and a little progress. Or you can start with a clear vision of the future you want, make strategy visible, and use that to align decisions across the business.
In this episode, Drew Neisser talks with Brian Evergreen, author of Autonomous Transformation, about why the AI conversation so often collapses into tools and use cases, and how leaders can pull it back to vision, outcomes, and the kind of alignment that drives transformation.
What you’ll take away:
- Why optimization can keep you busy while you stay stuck
- How to make a future vision concrete enough to act on
- What “no strategy without vision” means, and how to spot fake strategy
- Why leaders default to scorecards, and how it stalls transformation
- How Brian’s “nine-drant” separates “we can do” from “we need alignment”
- Why use-case-first AI limits gains, and how to shift to value creation
Plus:
- A simple workshop to surface visions before projects
- A clean split between what marketing can do now and what needs CRO and CFO alignment
- How to move AI from tool talk to a value creation leadership conversation
If you are tired of AI conversations that start with tools and end with small wins, listen to this episode for a vision-first approach that changes what you do next.
Renegade Marketers Unite, Episode 508 on YouTube
Resources Mentioned
Highlights
- [1:54] Three mistakes approaching AI and autonomy
- [4:24] Vision first, tools second
- [6:56] Where am I in your AI future?
- [13:24] Why synthesis beats analysis
- [16:18] Create new value with AI agents
- [24:17] Make the vision visceral
- [29:17] Map visions with the “nine-drant”
- [33:54] Work backward from the vision
- [38:47] Reframe the 50% cut story
- [45:28] Cuts have a culture cost
- [48:18] Tips for AI and autonomous transformation
Highlighted Quotes
"A use case is the friend of engineering, but the enemy of strategy. Starting with a use case automatically undercuts everything — you end up just doing something not very transformative or valuable." — Brian Evergreen, Autonomous Transformation
“You have a choice as a leader — am I going to try to cut costs with this and post good numbers for a few quarters? Or do we want to be the ones, because we're already there, we already have the resources, we already have the market share, to find new ways to create value with all the capability that's out there today?" — Brian Evergreen, Autonomous Transformation
“The problem with solving problems is yes, it's the craft of getting rid of what you don't want, but it doesn't have anything to do with getting what you do want." — Brian Evergreen, Autonomous Transformation
Full Transcript: Drew Neisser in conversation with Brian Evergreen
Drew: Hello, Renegade Marketers. If this is your first time listening, welcome. If you're a regular listener, welcome back. You're about to listen to an Expert Huddle where our flocking awesome community CMO Huddles gets exclusive access to experts, including the authors of some of the world's best-selling business books. In this episode, we're joined by Brian Evergreen, author of Autonomous Transformation. Brian makes the case that AI transformation isn't a tools problem, it's a leadership problem. He shares why you start with a clear vision for the future, get alignment on what you're trying to change, and measure outcomes that matter, not just efficiency. And he's blunt about this: if the goal is to simply cut headcount, you're missing the point. If you like what you hear, please subscribe to the podcast and leave a review. You'll be supporting our quest to be the number one B2B marketing podcast. All right, let's dive in.
Narrator: Welcome to Renegade Marketers Unite, possibly the best weekly podcast for CMOs and everyone else looking for innovative ways to transform their brand, drive demand, and just plain cut through, proving that B2B does not mean boring to business. Here's your host and Chief Marketing Renegade, Drew Neisser.
Drew: Hello Huddlers, I'm excited to introduce you to Brian Evergreen. He's the author of Autonomous Transformation, a former Microsoft executive, and a systems thinker who's helping leaders rethink the role of AI. Brian and I had a fascinating prep call on this, so I'm super excited about this. Hey, Brian, how are you and where are you today?
Brian: I'm doing fantastic. I am in Seattle, which is where I live, after two weeks in London, but I've been home a couple of days, so hopefully I'm not still jet-lagged, but if I say anything that doesn't make sense, this hopefully explains that.
Drew: Well, we expect you to speak with one, a London accent, and two — you went from, like, rain to rain. I'm thinking—
Brian: That's true, but you know what, I prefer it. I love the evergreen. I love trees. I love nature. So it's great.
Drew: Well, I don't know if we're talking about nature today. We may be anti-nature on this call. But one of the things that we'd like to do on this show is, just in case the audience has to leave early, or we need to convince them to stay — let's jump in with three things marketing leaders get wrong when approaching AI and autonomous transformation. If you'll just list them, we can go through them one at a time.
Brian: Sure. So three things that marketing leaders — and I think leaders more broadly — get wrong. When I think through the lens of marketing leaders, I think one, when it comes to AI — and again, this is not exclusive to marketing leaders — starting with the tool is a big one. So beginning with, you know, "this is the AI tool." I liken this to if you were to start a home remodel by coming in with a certain kind of hammer or a certain kind of saw and saying, "How are we going to make our house better with this?" That's not how we start a remodel. We start with what we actually want to do with the remodel, and then figure out which tools we need — which, I know, Drew will get into more of that. But give me the list so I can sort of get a framework. So number one: starting with a tool is a big one. Number two: not building enough trust and alignment across the organization for whatever it is you want to achieve, and thinking that, hey, from my marketing budget, I've got enough money to go buy this thing, and so I can make that decision — when, if you don't have alignment across the organization, and from a systems-thinking perspective... I'm getting back into it again, aren't I, Drew? Okay. So number two: not having enough trust and alignment. Number one was starting with the tool. Number two was not having enough trust and alignment. And number three — and this is true beyond AI, but it's become even more true for AI — is measuring the wrong outcome.
Drew: All right. To recap: we don't want to start with a tool. We've got to build alignment. And measuring the wrong outcomes is interesting, and I want to go a little deeper on it. It's funny, because I totally understand what you're saying. Don't start with a tool — of course, you want to start with a strategy, not a tool — but there are so many cool tools out there, and you hear about a tool and you just want to go try it. There is a tendency in this community where we are encouraging our teams to try new things, to expose us to sort of the art of the possible, if you will. And I'm going to give one more pushback on that one alone, which is: if you don't play with a tool, you have no clue as to what it could do anyway. So how — it's a little chicken and egg here — how do we avoid the starting-with-the-tool syndrome, and how do we get to a vision if we don't know what the tools can do?
Brian: Yeah, great question. So what I'd say is, if you wanted to figure out how to fix a car or make a car better, you wouldn't start by saying, "Okay, I'm going to learn everything there is to learn about the transmission, and I'm going to master the transmission, and then I'll be able to make cars better and fix cars, right?" You would say, "Okay, I need to understand how all these pieces work together." And if there are new pieces — like a new type of LiDAR, kind of radar optionality, or something else you might want to add to a car — you would research that, yes, you'd make sure you understood it, but you wouldn't come up with a transmission strategy for improving your car or a LiDAR strategy for improving your car. You would say, "Okay, based on what our car is and what it does and how it moves, maybe we can use this thing to make it even better. Let's take a look at that." And I'd say the same is true for AI. So based on how your marketing department is running, if you say, "Okay, this is where we want to play, this is how we're playing, this is the value we're creating right now for our organization and for the market, the brand we're maintaining, and all the other facets that are important to you" — and then say, "What would, setting all tools aside, what would the best version look like?" If you look at the total addressable value that your organization and your department creates, there's no way you're ever going to completely fill up that circle — but wherever you are, there's room around it that you can move into and say, "Our customers would love it if we could do XYZ thing," or "It would really push our brand forward if we could do ABC thing," or whatever it might be. So determining what that thing is — and to your point, Drew, in order to envision all that might be possible, you do need to have a level of literacy or awareness of what tools are out there. 
Because if you teleported from the past, from before cell phones or the internet were invented, and you were trying to dream up the future of a marketing department, you would have to catch up on where we're at and what's possible, just in terms of all the tools and everything that's out there today. There is a level of literacy there. Where organizations get stuck is by saying, "Let's go pick up this one specific..." — we did it with blockchain, with the metaverse, now with AI — "Let's pick it up, let's create a whole strategy around it." And that's actually where you fall into a pitfall pretty easily.
Drew: One of the reasons I think a lot of pilots have failed is that, in fact, they have been, "Hey, this thing can do this, so let's build a pilot," and not strategically solving particular business challenges. Okay, so I'm with you on that first one. The second one is not building alignment — and you also started with not building trust and alignment. That's always an issue. I know we talked a lot about that at the Super Huddle — just alignment with your CRO: you won't be successful if you're not aligned there. And obviously, if you're not aligned with where the CEO wants to take the company and how the CFO wants to spend money, you've got a problem. But why is it important in the context of AI adoption?
Brian: Great question. So I'll highlight why it's important by contrasting an area, very recently, where it wasn't. So you're modernizing to the cloud. You have an app that everybody's already using, but it's monolithic architecture — now it's being put into a much lighter, more agile architecture that can do more things and add more features. You don't really need to bring people along. It's like, "Hey, we're upgrading from the less-great experience you had before to a better experience." You don't need alignment really — you need to prove ROI to your CFO, sure. But there's not a huge amount of concern; incrementalism is just fine in that scenario, where you're going to take something you have and make it slightly better. AI flies in the face of that. Because, for one, inside your organizations — and I'd say marketing is one of the top three — there is this pressure from the hype in the market today: "Oh, AI agents and AI is going to take all these jobs," which I'll just say up front, I don't think is true. I don't think is accurate, for a number of reasons, which I'm more than happy to get into. I was working with Fortune 500 companies to build and deploy AI agents before ChatGPT ever even came out. So, a lot of history there, happy to talk about it. But in terms of the question — in all that context, where people are afraid for their jobs, which means they're not going to perform well or be able to come up with the best ideas and so on, there's also this idea that is essential to a portion of my work, which is that I believe a shared vision for the future is a foundation of trust. So what's happening when you have teams across the organization, and especially within your department, that are looking at what you're talking about doing and they're pushing back, a lot of it has to come down to this underlying hidden question, which is: "Where am I in this picture of what you're building toward? What does that mean for me?
What does that mean for my job or my future?" And so being able to say, "Hey, this is where we're going, this is where we're headed, this is how we see you winning, CRO. This is how we see you winning, CFO. This is the new family of jobs we think this is going to create, that we're going to have to work with you to create. Hey, over here, CFO, this is how we see the ROI" — obviously, there's got to be some degree of ROI, even if it's longer-term, because of the natural curve of AI. So I think that trust and alignment — and not just in a checking-boxes-of-alignment way — I think there's a more human element, looking each other in the eye: "Do we both see the same vision of the future that we both want to build toward? And are we going to invest our time, energy, and resources to do that?"
Drew: Got it. And it's so funny, because it's the vision thing. And if you're not aligned on the vision thing as a whole — I am curious, as is someone listening in live, but I don't want to spend much time on it — what are the other departments that are also under pressure to deploy AI?
Brian: I said top three, didn't I? Obviously, customer service — which, depending on the organization, rolls up to various places. That's the one I think is getting the most heat. I'd say that's the number one. And you see what Klarna tried, which — I don't know if anyone saw the full life cycle of that — was pretty fun to see.
Drew: By the way, I just want to thank Klarna for making that mistake first, so the rest of us didn't have to. Thank you. But, you know, bleeding edge is called that for a reason.
Brian: Exactly. I actually wrote, in my book in August 2023 — I'd already sent the manuscript out for publishing with Wiley — before the Klarna case study started coming out, right, where he made this first announcement about how they're going to start laying people off. And in my book, I said that basically, in the era of autonomous transformation, the temptation for some organizations will be to go to full autonomy, but the market will rebound, and they'll quickly find that they need to retain human connection where it matters most. And so it's kind of funny, because I said that right around the time he said his thing. A year later, it looked like he was right, because they laid off 700 people and were declaring all these savings. And then another year later — so two years in total — he came back. And for those who aren't familiar with the story, the big article that came out was, you know, Klarna lays off their customer service department and then scrambles and has engineers and marketing managers taking customer calls while they try to rebuild their customer service organization.
Drew: Oops. I can build on that. And this was amazing. And you know, so many people for years have been trying to automate phone trees — we all hate phone trees. When you finally get to a human, there's a moment of joy: "Oh my God, you can solve my problem!" And there are some businesses that have been built the opposite way and build incredible satisfaction. But I know from one company: it didn't matter whether they were calling to complain or with joy or had a problem — every call was an opportunity to make them feel better about their relationship and cement the deal. So the minute you put a bot in there, it better be damn good, otherwise you're going to risk that relationship. But again — thank you, Klarna. All right, we have to finish up on the third one.
Brian: Yeah. So the third one — I mean, when I said top three, I was really thinking it's one of the top, but I guess I didn't think of it specifically — obviously customer service is one. I'd say the third is more horizontal, across all organizations: the entry-level. And again, I want to be very clear, I'm not predicting what Dario Amodei predicted — that up to 50% of entry-level jobs will be gone in the next five years. I think what we're experiencing right now is because of the threat of recession that's been looming over us for a couple of years, especially with the political upheavals and the geopolitical climate that we're in right now, plus the tax code change that shifted how R&D costs are expensed — I think that's much more to blame for why we've seen so many layoffs and so few entry-level hires. I have yet to see a single company where, if you go look under the hood, they say, "Yeah, we actually have these AI agents doing all these things, so now we don't need to hire entry-level people." That's not what's happening, so it's a false narrative.
Drew: Okay, and I appreciate that. So the last one in this quick summary that we were working on — which is that people are measuring the wrong thing — let's talk about that.
Brian: They're measuring adoption in most cases, or they're measuring — obviously marketing is so difficult, because trying to figure out true attribution gets harder the larger your organization is. It's so easy at a small organization, right? You talk to this person, you know, they sent you a proposal, you did it, okay, good to go, right? Or, you know, you came inbound from this one website. Versus the larger the organization, there are just so many different ways people can find you, and so many different touch points that aren't tracked. So it's a classic problem. But I would say, from a systems thinking perspective — and you know, you introduced me as a systems thinker, so I have to live up to that — our instinct, for all of us, is analysis, which was codified long before the systems thinking movement of the 1950s: it's really about taking a subject and breaking it down and trying to understand each individual part and how well it's doing, and then re-aggregating that into how well the whole is doing. So if you say, "Oh well, how good is our reach? How many impressions are we getting? What level of clicks are we getting on those? And then how many demos are we booking?" Or whatever — those are all functions of analysis, of breaking it down to understand individual parts, to try to add it back up. Like, is the whole working? But you could have a million demos booked a day, you know, a billion impressions a day, and all these things looking off the charts, and yet still not have conversion. And that's because analysis can tell you how something's working, but it can't give you an understanding of why it exists or how well it's actually doing. For that, we need synthesis. So in the case of marketing, the role that the marketing function serves in the broader organization — like, what does the sales team need from marketing? What does the customer service team need from marketing?
And based on what all those other departments need from marketing, because they're all interdependent, right? You could have the best marketing department in the world, but if the product's terrible, you know, it doesn't matter, right? It's interdependent. And so measuring the interaction between — how well are we understanding and perfectly positioning exactly what that offering is that we're putting in the market — is more important than how many impressions, because you could have a million impressions of the wrong impression, right? And that's happened so many times for so many organizations that I've been a part of and worked with. I liken it to this: if Beyoncé were going to be in a choir, she would need to degrade her performance. We wouldn't be measuring and saying, "Well, Beyoncé did this well, and everyone else needs to pick up the slack." We would say Beyoncé needs to actually sing below her full capacity for it to all sound good together. And that's not just a marketing problem. It's for every org — every department tends to focus on "how well am I doing," and then blames other people if something's going wrong with what they're doing, when really it's the interaction between all of our organizations that the real performance is actually measured by.
Drew: So I'm wrapping my mind around the measurement, and the measurement we hear a lot about in the community is the usage of AI. I'll give you an example. One member of our community was looking to help her team be 50% more productive thanks to these tools; it wasn't "we're going to make our team smaller," but these tools were going to make her team 50% more productive. I tell that story mainly because the emphasis so far has been on using these tools to be more efficient. We're going to be able to get more content out faster, and not necessarily, "We're going to then use that time to spend it on other things that are, in fact, more productive," and so forth. So we're getting rid of a lot of the lower-level, you know, brainless stuff and adding more time for brainpower. But the pursuit of efficiency alone, to me, falls into this bucket of myopia, versus, say, using AI to do new things, to create new levels of value. I'm just curious what your point of view on that is.
Brian: I love that you asked that, because I actually — in the same article where I introduced the idea of total addressable value — I actually created a three-circle Venn diagram. The biggest circle is total addressable value. One of the smaller circles is addressable by humans, but really it's addressable by whatever you're already doing. Now there's a certain amount of the big amount of value you could create that you're already doing, whether it's software, automation, or your people. Then there's another circle that is addressable by AI and AI agents, and there is some overlap, where you could take something you're already doing and say, "How do we do this exact same thing, create this exact same value with AI, and maybe create it more efficiently and faster and cheaper?" Like, yes, there is some of that. But the majority of actual — where the potential value with AI and AI agents is — because it is fundamentally good at different things than people are, and different things than automation; it can be used in automation, but they're not the same. So the majority of the actual value to be found there is in creating new value, like you're saying, Drew, that hasn't existed before. And I'll give you a quick example that I give often: if you have two companies — let's say that you and I, Drew, we each run a construction company. Let's say I live in Seattle. Drew, where are you calling in from today?
Drew: New York City.
Brian: New York City. So you're running a construction company in New York City. I'm running one in Seattle. We each need steel because we're building skyscrapers, let's say, and for some reason we need the exact same amount of steel. I call ABC Steel and I say, "Hey, I need some steel." And they look in their warehouses and their databases of all their warehouses, and they say, "The only warehouse we have that has that steel is in New York City." So we're gonna drop a purchase order, we're gonna get a truck driver, they're gonna drive it across the country, right? You call not ABC Steel but XYZ Steel, and they check their database of all their warehouses, and they say, "Well, we have the steel you need in Seattle." And then now, you know, you see the inefficiency, right? It's so obvious — we're each paying for steel to be shipped all the way across the country, not to mention the carbon impact, the livelihoods of those people that are having to drive all the way across the country and then straight back. So that's the world we live in today, because it would be too inefficient for every steel company to spend time calling all their competitors and everyone else to see, "Do you have steel there? I don't have steel there." Right? It doesn't make sense. In the era of AI and AI agents, we could instead have a marketplace of AI agents that represent these different steel companies — even though they don't have to be that, they don't have to be Microsoft or Amazon or Google; steel companies can do this relatively simply. 
There'll have to be a version of — like the way that Square enabled everyone to be able to do payments — it'll be a version of that where essentially, when I call up, it actually — they could trade customers behind the scenes, where instead of it being people having to call and check with other people and they're checking a database, it's just at the moment of inquiry it's also interacting and negotiating even with other agents that represent other steel companies, and then they're trading, they're getting referral fees, so we're happier as a customer, there's less waste in the society, and they're still making referral fees and making money, even though they're not having to fulfill that order. So that's an example of the amount of new value that we're — and I think, from a marketing perspective, as we're moving into an era where purchase decisions are going to be made by an agent, where I might say, "Hey, I have a Super Bowl party coming. I want — give me everything I need. I have 11 people coming; two of them are vegetarian, one's vegan, there's one gluten-free, and the rest have no dietary restrictions. I want to throw a really good Super Bowl party — give me everything I need." Now, I might go out of my way if I have a strong enough brand relationship with Coca-Cola and say, "And make sure all the products are Coca-Cola and not Pepsi." Maybe I'll say that, or maybe vice versa, right? But for most — I'd say the average consumer most likely, at that point, is stating the outcome that they want and leaving it up to this group of agents to come up with the recipes. And they might come back to me and say, "Okay, we're going to make a vegan pigs in a blanket, and we're going to make all these things, and here's" — you know, maybe, right? And then I review all that and say, "Okay, that sounds great." I don't care at that point if it's Safeway, Albertsons, or Walmart who's the behind-the-scenes store. 
I don't care if it's Uber, DoorDash, or some no-name brand doing the delivery. And then I may not care if it's Lay's or which actual brand of the sub-brands of all the different food. I might just care about the outcome and just leave it up to the food — unless I have a negative experience, in which case I'll come back and say, "Hey, never get this food again, because that wasn't great," right? So as our decisions change — at that point, you could have a company perfectly optimizing their conversion rates, and as organizations continue to optimize, let's say in the food and beverage industry, their end-of-aisle — you know, how they're showing up at end of aisle and purchasing the best placement — a bunch of Uber drivers just walking right past because they don't care, because they're filling out a list that's already been predetermined, and now those marketing dollars are wasted, right? So I think that's a long answer to your question, Drew, but I think that's kind of where — just a diatribe, I suppose — on where I think the focus on new value needs to be, a little bit on where the puck is headed, where the consumer is headed. And I think especially for marketers, leading in that, and also recognizing — I think, because we're in such an era where all of our societal and political — and what it means to be human with AI and everything — is just so jumbled, that I think we're back in a need of — we're not in a space of just "let's keep automating whatever we've been doing." I think we need to start new modalities, new ways of connecting, reconnecting with how we — and we're seeing more and more brands obviously do that, and very successfully.
Ad Break: This show is brought to you by CMO Huddles, the only marketing community dedicated to B2B greatness and that donates 1% of revenue to the Global Penguin Society. Why? Well, it turns out that B2B CMOs and penguins have a lot in common. Both are highly curious and remarkable problem solvers. Both prevail in harsh environments by working together with peers, and both are remarkably mediagenic. And just as a group of penguins is called a huddle, our community of over 300 B2B marketing leaders huddle together to gain confidence, colleagues, and coverage. If you're a B2B CMO, why not dive into CMO Huddles by registering for our free starter program on cmohuddles.com? Hope to see you in a Huddle soon.
Drew: Well, perfect transition, because you have this concept in the book about future solving, and that flips how leaders think about planning. Talk a little bit about this, because part of this does take some imagination and some projection and so forth. So let's get into future solving and how that's sort of different from problem solving.
Brian: Well, thank you, Drew. That's one of my favorite things to talk about. So all of us have been taught to be good at problem solving, right? That's like — you go to any conference, everywhere they say, "We'll start with what problem you want to solve." That's the guidance; that's like the prevailing best practice. And in the context of that best practice, 90–95% of AI projects fail. When I was leading AI strategy for Microsoft US, I was flying out to meet with Fortune 500 C-level exec teams to set their AI strategies. And I got fed up with the fact that we were having all the same problems, the low success rates, even though we had tremendous resources, some of the most brilliant people working on these things. And so I said, "Okay, I need to challenge all conventional wisdom that I've heard," and one of them was problem solving. I said, "Okay, is there any issue with this?" And what I realized is — and what I learned through research and learning from people that came even before me and passed away before I even started this research, unfortunately, especially one particular hero, Dr. Russell Ackoff — for anyone who's familiar with his work, he was a Wharton professor who was just phenomenal. But the problem with solving problems is that problem solving is the craft of getting rid of what you don't want. So when you're problem solving, it's an elimination exercise. I find something I don't want, I eliminate it. And it makes sense that we're so focused on that, because that's one of our inheritances from the Industrial Revolution. When a machine goes down, you find the problem, you fix the problem, hopefully the machine comes back. You want to make the machine work better, eliminate more problems. So it's just baked into how we think about everything. But what I'd like to say is that the problem with solving problems is, yes, it's the craft of getting rid of what you don't want, but it doesn't have anything to do with getting what you do want.
Back when people were using television and clicking through channels — I know some people still do, but most of us don't — statistically, if you didn't like what was on the TV and you clicked up or down, you were actually less likely to see something you wanted to see than to see yet another thing you didn't want to see. So then you're clicking through, you know, 30 channels, until you finally find the thing that you want. So what do you do instead? You go to the TV Guide, or you go to the homepage, you watch as they're scrolling through, you go, "Oh good, that show — yes, I'd like to watch that. Let's go straight to channel 34," right? That's essentially what business leaders need to do today, which is start with what future you want to solve for. Like, what is an ambition, what is a goal that is worth solving for — and make it visceral. JFK gave a great example of this when he said, "By the end of the decade, we're going to land a man on the moon and bring him back safely." Like, when you say "land a man on the moon" — he could have said, "By the end of the decade, we will have achieved a moon landing and a safe return" — same thing. But when you say "land a man on the moon," you can almost see the feet thudding against the moon, right? When I work with IT departments, I tell them an example of the kind of future worth solving for would be: "Our work as IT is so good that when shadow IT vendors come to our marketing or come to our business partners, they're laughed out of the room, because they're so happy with our work as IT that it's actually hilarious to them to imagine partnering with shadow IT vendors." Like, that's a visceral goal. It's not just a set of numbers — you can actually imagine the energy that would have to be true for that. So that's the second piece. So you first start with: what is that future we want to solve for? I'll say, as we say that, there is no such thing as a strategy that isn't anchored to a vision.
So I wrote an article recently that's coming out next month; it's called "No Strategy Without Vision." So we like to talk about making strategic decisions, but you can't actually make a strategic decision if you don't have a strategy — otherwise it's just a decision. And calling it strategic means it's aligned to the strategy. So if there isn't a strategy, it's not aligned to anything, right? It's like left-aligning, but there's no left — you just keep going, right? So what I recommend to leaders is to start with creating a vision, which is something we haven't been trained to do. So most of us — and I'm sure most of you — in school, we aren't taught there's a "How to Create a Vision 101." It's like, you know, all the basics of marketing, basics of accounting, basics of management, etc. But creating a vision isn't taught in school, and it's almost never taught in organizations either. We're taught how to be good operators, how to be good at executing a scorecard or executing against a goal or something that's been set, but not necessarily creating a vision. So that's actually the work and research I'm doing now — it's part of one of the core pillars, which is how to teach people how to actually create a vision, so that we can have a new era of visionary leadership inside of our organizations and at large. So, yeah, I could go on for a while.
Drew: So when we talked before, we also talked about your nine-box framework. I think it would be good to explain that.
Brian: Yeah, I call it a "nine-drant" — and this is a very nerdy joke, so forgive me, but there's no Latin equivalent of "quadrant" when you get up to nine, so you can only really call it a nine-drant. So I actually — I know you mentioned it'd be helpful if I actually showed it, which I can. So it's part of — I'm pulling this up from a keynote deck. So where I talk about one of the first things you have to do when you're solving for the future is to start with a vision, and I share some of what I just shared with you, which is that creating a vision is a skill, just like program or project management, or writing JavaScript, or anything else is a skill, but it's rarely included in our training or job description. So something that you could do with your team this week — if you can find the time, maybe next week — is to actually go and do a workshop and say, "All right, we've been doing some great things, you know, we have our operational plan locked down, but we want to come up with a vision for the future. Here's the exercise we're going to do: I'd like everyone to write out three visions for the future of either our team, department, or organization, or even market." Right — scope and size, all the way out, it could be the whole market. "What do you think would be worth solving for?" And give them 5–10 minutes to think about it, write them down on sticky notes, but don't show them the nine-drant until after they've written them down. And then once they've written them down, then you show them the nine-drant and say, "Okay, this is a graph where we're going to bucket and organize the visions that we've created." So you can see on the x-axis: "Achievable within my team" — in other words, I don't need anyone else's permission to go do this; I can achieve it right now with the time and the team and the dollars that I have. Then to the right of that is "Achievable within my organization with consensus." 
That's where you say, "Okay, I as a CMO would actually also need the CRO and maybe the CFO and some other departments to agree to actually do this with us," or maybe it's a product team, and so I'm not the one who could just make the call. And then the third is "Achievable in my industry with a coalition." So that's where, hey, this problem or this goal or this future we're trying to solve for is bigger than just our company — we would need other companies to co-sign, whether that be other companies in the same type of company that we are, or a coopetition kind of arrangement, or our partners, or there would have to be a government policy change. It could be anything that requires more of a coalition; that's on the right. And then you can see on the y-axis: "Not ambitious enough," "Bold and achievable," or "Too ambitious." Obviously, the goal is bold and achievable. And what I always share is you really want to have a portfolio of visions across the middle here, where you have ideas — some of which are, ideally, they're all bold and they're all achievable. Some of them are going to be achievable just with your team. Some you need more of the organization to sign off and partner with you on. And some that might need a bigger, longer industry coalition. And these almost start to map the McKinsey Horizons in a way, based on the length of time they would take. But it's really, really interesting when you do this exercise, because what I do is — once I've explained it to them — I say, "Okay, everyone, we're going to do show and tell. Show the three visions you wrote down and propose for each one where you think it belongs on the nine-drant." And a lot of times, you'll find a clustering, or you might find this one leader is only coming up with things that would fall within their purview. Or, "Oh my goodness, all of our leadership team — everything they came up with would require a whole industry coalition. 
No wonder we're struggling with where to get started, because it's all lofty." It's good that we have those kinds of visions, but now we can constrain ourselves to say we also have to come up with stuff that's more near-term, right? So this is the framework. There's over $20 billion of investment that has been invested on the future-solving methodology — not directly to me, right, that would be nice, but — and this is one of the core pillars of doing that: first going through this and a series of other exercises to walk away with a portfolio of visions worth solving for.
Drew: Got it, really interesting. And just a plug for you — you actually run these workshops for companies, right?
Brian: Yes, workshops and even broader scale, like self-paced resources and transformations, when a company wants to ditch OKRs and replace them with future solving.
Drew: Okay, so beyond that, is there anything else — while we have your deck here — is there anything else that you want to cover, Professor?
Brian: Let me see. All right — let's say a vision would be, and I'm trying to make it visceral. I don't know what kind of companies — I know, obviously, you have CMOs.
Drew: It's all B2B. It's a lot of SaaS, but not only SaaS, a lot of tech.
Brian: Let's say you set a vision where you said, we want our marketing, all the accumulation of all the marketing efforts that we do, to be so strong that we are actually asking ourselves, do we need an outbound sales team? Because it's just like overwhelmingly inbound, because we're doing such a good job, right? That's an example of what I mean by visceral — you can imagine the phones ringing off the hook, and what a great problem that would be to have to deal with. So let's say that's the vision that you're setting at the very top, and that's what a future point is. Yeah, everybody — as authors go through these journeys, and if anyone else has gone through writing a book, you know — I know you have, Drew — you know, sometimes you coin something, you name it something, then you realize, ah, it's more like, instead of the future point, it's the vision, right? Or it's tomorrow. It's the thing you're trying to go toward. But once you've defined that — so again, if we defined it as that our marketing is so good that we don't even know if we need an outbound sales team anymore — then you ask the question, what would have to be true for us to reach that future? And the next step after that — so you ask: what would have to be true? And so you start writing down the things that would have to be true. And then you say, okay, so if these four things were true, would we be in that future? Would the phones be ringing off the hook? Would we feel like we don't even need outbound sales anymore? Would we just reposition all those outbound sellers to becoming inbound sellers? And if the answer is yes, then you move on. If the answer is no, then you say, okay, well, if this were true, this were true, this were true, and this were true — what else would still be missing before we'd actually be in that future?
And you document all of that at that altitude, and then say, okay, once you have it where everyone agrees — yes, if those seven things, or those five things were true, we'd reach that future — then you do it again, isolating each theory. So you might say, for theory — you know, this one over here — which is that, you know, a click down from that is that, you know, we'd have to have such a strong brand impression that every time someone saw it, they immediately knew who was making the commercial, or who was featured in the ad, or whatever. Just be such a strong brand. Then you might say, well, what would have to be true for that? Okay, well, distinctive brand elements, distinctive storytelling, distinctive whatever. And now you're coming up with the other hypotheses that might have to be true. And then you do it again and say, well, for our storytelling to be considered distinctive in today's era with AI and all the AI slop and everything else that's going on — everything's starting to look the same. In some ways it's easier to distinguish. In some ways it's harder. What else would have to be true? And you keep going down until you get to the point where you say, okay, well now everything below this line is stuff that's already true. So this is just the starting point of everything that's already true. And we've documented all the other things that would have to be true in order for us to go from where we are today — which is the starting point — to where we want to go, which is the vision or the future point. And what this does is it creates an actual — unlike anything I've seen in any organization I've ever worked with. Every time I say, can I see your strategy — which is a fun question, just ask it — it's like, well, what do you mean? Like, you want us to show you the deck McKinsey made for us? 
Or, well, it's a memo, or it's a series of PowerPoint decks that describe the context we're in and the areas we want to play and some of the investments we're making — which I would argue is a highly contextual plan, but not a strategy, right? So a strategy is something you document as a means of going from where you are to where you want to go. And this is the only framework I'm familiar with that puts that all in one single sheet of paper that you can show someone and say, here's our actual strategy. This is where we're trying to go. This is where we are. These are all the things we think would have to be true. Oh, and then, after that, you can say, here are the things we're investing in right now. Here's the stuff we disproved. Here's a whole branch that we're going to see if we can get a strategic partner to go invest in, because it's not a core competency to us. Here's something we found that the National Science Foundation is already investing in, so we're not going to spend our capital — we're just going to monitor it. Or maybe, in your case, the NRF is already doing some kind of cool research project, so we're just going to watch that from afar, not spend money trying to answer that question while we focus on the stuff that is core competency to us. Cool — you said Professor, so I tried to go into as much of a professor mode as I could.
Drew: I love it. And so strategy is the bridge from where we are to where we want to be. And if you don't have that, you don't have a strategy. I love the simplicity of that. I want to make sure that we get this — that was big picture, and I think that's really important, because we talk about, if we're going to be driving big change, you're going to need to do that. I still, at the same time, want to make sure we give CMOs some concrete things that they can do. I'll start with one. Sam Altman has said repeatedly, you know, the billion-dollar company with two people — it's happening. Every department will be smaller and more efficient thanks to AI. And CEOs, particularly Silicon Valley CEOs, are hearing that, listening to it, and that translates into: hey, marketing, cut your team by 50%, right? Not vision — just, you should be able to do it much more efficiently. It's not, what can you do better? It's not strategy. It is cut costs, because you should be able to do that. And first of all, there's no business case out there that shows that somebody actually cut their marketing staff by 50% and actually grew the next year. In fact, there are very few cases at all where any kind of marketing cut actually led to growth. But nonetheless, that is the prevailing wisdom. So how does a CMO fight that right now? Because it feels like prevailing wisdom. If you are a forward-thinking company and you are working — God forbid — at OpenAI or Lovable, and you're on a nine-nine-six schedule — nine a.m. to nine p.m., six days a week — which is an expression I heard today for the first time. And I went, oh my god. That's a thing. So obviously, if everybody was working nine-nine-six, you would need a tenth of the people — some EMTs standing by.
Brian: Yeah, no kidding. So the question is really: how do we — if you're a marketing leader, you're getting the edict from either your CEO or the board — figure out how to just cut insane amounts of cost out of marketing? The first thing I would say is that you're right, that that's not a vision at all, and so it's not visionary leadership. Microsoft is a great case study. In 2014, when Satya Nadella came in as the new CEO, he set a vision for the future of the organization, and he executed it, honoring the history but also forging a path to the future. He said, we need a new people strategy, a new technology strategy, and a new business strategy. He didn't just say a new business strategy or a new technology strategy — but also people, which is really interesting. There are a lot of lessons, and I was able to see some of those firsthand. What's interesting is they 10x'd their value in about 10 years, right? It's even better now. The last time I checked was a couple years ago — it had been 10 years — and they 10x'd their value in those 10 years. So it's interesting, because if you think about where Oracle and where IBM and a bunch of other companies that are seen as legacy tech companies were — they weren't that far behind. If you go back and look at where they all were in 2014, there wasn't that big of a difference. And then look from 2014 to today at the way Microsoft has absolutely exploded, right? And I might tell the story to a C-suite exec and say, do you think that was because Satya came in and said, let's cut our marketing spend by 50%? No, not at all, right. He had a vision for the future of the industry as a whole. Things are moving to cloud. We need to change the way we're doing licensing. We need to change the way that we've traditionally operated as a walled garden by not being willing to put our software on Apple devices. We need to change that.
We're spending all this money in mobile and we're losing — we need to cut bait, let it go. We need to double down on things we're really good at, et cetera, right? So if you're working for the kind of leader with whom you can have that conversation, frankly, have that conversation and say, we have a new impetus with new technologies that are capable of insanely cool things that we couldn't do before. It's a whole new palette — you know, it's like, if you're a painter and all of a sudden a bunch of new colors appear that are, like, three-dimensional or something, and you're like, wow, I can't believe I can actually paint with these things. You have a choice as a leader to say: am I going to try to cut costs with this and sort of take the value we're already creating and try to keep creating the exact same value a little more efficiently, and post good numbers for a few quarters, until someone else comes up with a completely new way of doing things now that we have this new palette that just absolutely undercuts our whole business? Or do we want to be the ones — because we're already there, we already have the resources, we already have the market share — that if we can get there first at finding new ways to create value with all the capability that's out there today, we can create whole new growth markets for ourselves? And so that's the first thing I would do: have that conversation with them. The second thing I might do, if they continue to push back, is — there's this thing in cognitive behavioral therapy where you go: someone says, oh, I'm afraid that I'm a bad father, let's say, and you say, really? And what does it mean if you're a bad father? Well, it means I'm letting my children down. What does it mean if you're letting your children down? Well, it means — and you just keep going until you get to the root thing. 
So you could do a version of that — don't call it cognitive behavioral therapy — with your CEO. You could do a version of that with this idea and say, well, where did you hear this idea that you can cut your marketing spend by 50%? Oh, well, I heard it on this podcast. Okay, well, who said it on that podcast? Well, it was Sam Altman. Okay, does Sam Altman work in marketing? No, he doesn't. Has Sam Altman ever worked at a company of our size? No, he hasn't. Okay. So this person who's never been in the context we're in is making some kind of bold declaration about completely changing another industry. It's like, if I, as someone who has never worked in a steel mill — and I'm coming back to steel — said, you know, five years from now you're not going to need any more workers in steel, because it'll be fully automated, it'll all just be autonomous. And hey, I'm the author of Autonomous Transformation — come on, right? But it's like, I don't know that. I have no context with which to make that statement. So I think discernment is one of the most important skills of 2025 and of this decade. And sometimes you have to bring others along when they're showing a lack of discernment, just talking them through it — the Socratic method, right? Like, well, really, where'd you hear that? It's a fine line, because you don't want to sound judgmental, or be —
Drew: Where did you hear that, stupid?
Brian: Yeah, exactly. Right. Just say, like, oh, that's so interesting — can you share where you heard that? And then if they say, oh well, actually, yeah — you don't have to say, can you share the well-documented case study that had a broad — you don't have to cut them down. You can just say, can you share that with me? I hadn't heard that. And if they send you a link to, like, some kind of tech podcast that features an AI CEO who's only ever worked in AI startups, then you can just ask and say, you know, do you have any examples? It would be helpful for me if you could share some examples of other CMOs at companies like ours who have done this and are talking about it. And you know, you can go from there, right?
Drew: Yeah, you know, I think there are a lot of startup CEOs out there who will say, my vision is to cut the staff by 50%, go. And they'll just say, that's my vision. And you're going to have a tough time arguing it. And they don't have the experience; they'll just say they're recreating everything with magic. I wonder if there's also a flip side of this — I understand that you want to cut our costs by 50%, and I do too, but all of that is in service of growth, correct? So if I could show you that with the team that we have, we could actually grow faster, would that vision be in line with yours? Because I think that is part of the conversation. There are efficiency gains. I don't know if there's a diminishing return on efficiency gains. I don't know if we're talking about 50% or 20% — but there are efficiencies.
Brian: Yes, and I agree. And I think I would add to that that you can also document what — another way you could approach it is to say, all right, that's one option — Option A: cut 50% of the workforce, or whatever. Option B, and then share other options — like, Option B is something I would do: the bold vision. Do that visioning exercise I shared with you and your leadership team, and come back to them and say, Option B is this vision for the future of how we're going to market and reach our customers. And that's Option B. And here is the potential growth that I think we could get to if we were able to achieve that. Another factor that I think is being left out of these conversations is the cultural impact, right? Because people's performance in an organization is based on their sense of safety, their sense of cohesion, their sense of alignment, right? Gen Z more than previous generations, and Gen Alpha even more than that — there is a tangible, actual cost to reducing headcount beyond the actual number of people that have been laid off. You used to be able to do it somewhat quietly. We'd go, oh, I didn't — yeah, I guess there were some layoffs. And now Linda's not in our office anymore. That's too bad, right? 30 years ago, okay, you don't really know. Now, you know: oh yeah, 10,000 people were just laid off. Oh my goodness. And wow, so that could have been — oh, this is the fourth 10,000-person layoff in our company in five years. That could so easily have been me. And so the psychological safety impact of that, and the way it breaks apart teams and all of the work that those people had been championing and working on just being left to kind of be there as technical OneDrive debt for the organization to try to navigate — I think that is a very serious cost that is very difficult to quantify. But I think that when people are saying, let's just cut 50% of the workforce, they're leaving that off — they're not thinking about that.
Drew: Yeah, it's so funny. I mean, this is a much easier conversation if the CEO can connect marketing to revenue, if the CMO has done that — because if we can show that marketing is delivering not just 90% of pipeline, but 90% of revenue, these are different conversations. But if there is this disconnect, at least from the CEO's standpoint, in connecting the dots to the bottom line — the things he's going to be measured on — these are much harder conversations. Anyway, we're going to wrap up. And we're going to wrap up with two do's and one don't for CMOs when it comes to approaching AI and autonomous transformation.
Brian: Number one to-do is start with vision. For any project that you have right now, ask yourself: is there a vision that this is tied to, or is this just something we're doing because we think it might increase a number somewhere? So start with vision, and recalibrate everything you're doing to some kind of overarching vision. It's okay to have more than one. Another to-do is to make your strategy visible. So often we've just accepted that strategy is invisible everywhere, but we'll feel more connected to what we're actually doing and be able to think more clearly if we can actually see what the strategy is and talk about it. The one don't that I'll add — which we didn't cover in this session but I think is critically important — is do not pursue use cases or low-hanging fruit. A use case, as I like to put it, is the friend of engineering but the enemy of strategy. It's good when you're trying to build a product — you need use cases to figure out if the product will ever be useful to anyone. But when you're trying to develop a strategy for what you're trying to do, starting with a use case instead of a vision automatically undercuts everything, and you end up doing something that is almost never actually transformative or extremely valuable.
Drew: Right. But you might get a 2% efficiency improvement. It's just not visionary. And I think the key conversation that we're having today, and that I want to leave our listeners with, is: at the grown-ups' table, at the C-suite, you as a CMO are leading with vision, and you're helping those around you have vision. And so the more that you connect your AI to vision, the better. And so if you just talk about these smaller use cases, it's no different than you talking about smaller marketing wins. We're trying to elevate the conversation into something that's transformative. Brian Evergreen, thank you so much. Where can people find you?
Brian: LinkedIn is the easiest place to find me. Feel free to add me, feel free to reach out. But yeah, LinkedIn is the easiest place.
Drew: Awesome. All right. Well, we appreciate you and all of the insights that you have provided.
If you're a B2B CMO and you want to hear more conversations like this one, find out if you qualify to join our community of sharing, caring, and daring CMOs at cmohuddles.com.
Show Credits
Renegade Marketers Unite is written and directed by Drew Neisser. Hey, that's me! This show is produced by Melissa Caffrey, Laura Parkyn, and Ishar Cuevas. The music is by the amazing Burns Twins, and the intro voice-over is by Linda Cornelius. To find the transcripts of all episodes, suggest future guests, or learn more about B2B branding, CMO Huddles, or my CMO coaching service, check out renegade.com. I'm your host, Drew Neisser. And until next time, keep those Renegade thinking caps on and strong!